Search results for "Data interpretation"

Showing 10 of 195 documents

Diode laser spectroscopy of the nu(8) band of the SF(5)Cl molecule.

2003

Abstract Diode laser spectra of SF5Cl have been recorded in the ν8 band region at a temperature of ca. 240 K, a pressure of 0.25 mbar, and an instrumental bandwidth of ca. 0.001 cm−1. Four regions have been studied: one in the P-branch (906.849–907.687 cm−1), one in the Q-branch (910.407–910.944 cm−1), and two in the R-branch (913.957–914.556 and 917.853–918.705 cm−1). The whole ν1/ν8 dyad of SF5^35Cl had previously been recorded in the group of Professor H. Burger in Wuppertal using a Fourier transform infrared spectrometer [J. Mol. Spectrosc. 208 (2001) 169]. These data have thus been combined with our diode laser data in the aim of refi…

Sulfur Compounds; Chemistry; Lasers; Fluorine Compounds; Analytical chemistry; Infrared spectroscopy; Laser; Atomic and Molecular Physics and Optics; Spectral line; Analytical Chemistry; Fourier transform; Chlorides; Spectrophotometry; Data Interpretation, Statistical; Atomic physics; Fourier transform infrared spectroscopy; Ground state; Hamiltonian (quantum mechanics); Instrumentation; Spectroscopy; Diode — Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy
researchProduct

Deterministic chaos and the first positive Lyapunov exponent: a nonlinear analysis of the human electroencephalogram during sleep

1993

Under selected conditions, nonlinear dynamical systems, which can be described by deterministic models, are able to generate so-called deterministic chaos. In this case the dynamics show a sensitive dependence on initial conditions, which means that different states of a system, being arbitrarily close initially, will become macroscopically separated for sufficiently long times. In this sense, the unpredictability of the EEG might be a basic phenomenon of its chaotic character. Recent investigations of the dimensionality of EEG attractors in phase space have led to the assumption that the EEG can be regarded as a deterministic process which should not be mistaken for simple noise. The calcu…
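The first positive Lyapunov exponent the abstract refers to quantifies the sensitive dependence on initial conditions. As a minimal, self-contained illustration (not the paper's EEG method), the largest Lyapunov exponent of the fully chaotic logistic map can be estimated by averaging the logarithm of the local stretching rate |f'(x)| along an orbit:

```python
import math

def lyapunov_logistic(r=4.0, x0=0.2, n=100000, burn=1000):
    """Estimate the largest Lyapunov exponent of the logistic map
    x_{n+1} = r*x*(1-x) by averaging ln|f'(x_n)| along a long orbit."""
    x = x0
    for _ in range(burn):                 # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r - 2 * r * x))   # |f'(x)| = |r - 2*r*x|
        x = r * x * (1 - x)
    return total / n
```

For r = 4 the exact value is ln 2 ≈ 0.693; a positive estimate is the hallmark of the exponential divergence of nearby trajectories described above.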

Adult; Male; General Computer Science; Models, Neurological; Chaotic; Systems Theory; Lyapunov exponent; Control theory; Attractor; Humans; Statistical physics; Mathematics; Sleep Stages; Butterfly effect; Quantitative Biology::Neurons and Cognition; Electroencephalography; Middle Aged; Nonlinear system; Data Interpretation, Statistical; Phase space; Quasiperiodic function; Sleep; Cybernetics; Biotechnology — Biological Cybernetics
researchProduct

Calculation of NNTs in RCTs with time-to-event outcomes: A literature review

2008

Abstract Background The number needed to treat (NNT) is a well-known effect measure for reporting the results of clinical trials. In the case of time-to-event outcomes, the calculation of NNTs is more difficult than in the case of binary data. The frequency of using NNTs to report results of randomised controlled trials (RCT) investigating time-to-event outcomes and the adequacy of the applied calculation methods are unknown. Methods We searched in PubMed for RCTs with parallel group design and individual randomisation, published in four frequently cited journals between 2003 and 2005. We evaluated the type of outcome, the frequency of reporting NNTs with corresponding confidence intervals,…
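For time-to-event outcomes, one published approach (due to Altman and Andersen; whether the trials in this review used it is a separate question) derives the NNT at a fixed time point from the control group's survival probability and the hazard ratio under a proportional-hazards assumption. A hedged sketch:

```python
def nnt_survival(s_control, hazard_ratio):
    """NNT at a fixed time point t from the control-group survival
    probability S_control(t) and the hazard ratio, assuming proportional
    hazards (Altman & Andersen): S_treat(t) = S_control(t) ** HR,
    NNT(t) = 1 / (S_treat(t) - S_control(t))."""
    s_treat = s_control ** hazard_ratio
    arr = s_treat - s_control            # absolute risk reduction at time t
    if arr <= 0:
        raise ValueError("no treatment benefit at this time point")
    return 1.0 / arr
```

For example, with 70% control-group survival at five years and a hazard ratio of 0.8, treated survival is 0.7**0.8 ≈ 0.752, so NNT ≈ 1/0.052 ≈ 19.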

Epidemiology; MEDLINE; Health Informatics; Randomized controlled trial; Statistics; Confidence Intervals; Humans; Medicine; Randomized Controlled Trials as Topic; Absolute risk reduction; Consolidated Standards of Reporting Trials; Confidence interval; Clinical trial; Sample size determination; Data Interpretation, Statistical; Sample Size; Number needed to treat; Physical therapy; Research Article — BMC Medical Research Methodology
researchProduct

Activity of O 6 -methylguanine DNA methyltransferase in mononuclear blood cells of formaldehyde-exposed medical students

1999

A recent study reported that exposure of student embalmers in Cincinnati to high concentrations of formaldehyde (2 mg/m3) reduced the activity of the DNA repair protein O6-methylguanine DNA methyltransferase (MGMT). A reduction in a DNA repair enzyme may strongly increase cancer risk, not only with respect to the agent causing the reduction but with respect to all carcinogens causing lesions subject to repair by the enzyme in question. Thus, we examined whether formaldehyde exposure of 57 medical students during their anatomy course at two different universities in Germany influenced MGMT activity in mononuclear blood cells. Mean formaldehyde exposure of 41 students was 0.2 +/- 0.05 mg/m3 …

Adult; Male; Pathology; Students, Medical; Time Factors; Methyltransferase; Alcohol Drinking; DNA repair; Health, Toxicology and Mutagenesis; Formaldehyde; Toxicology; Peripheral blood mononuclear cell; DNA methyltransferase; Fixatives; O(6)-Methylguanine-DNA Methyltransferase; Internal medicine; Hypersensitivity; Humans; Neoplasms; Carcinogen; Smoking; Environmental Exposure; General Medicine; Endocrinology; Enzyme; Chemistry; Data Interpretation, Statistical; Toxicity; Leukocytes, Mononuclear; Female — Archives of Toxicology
researchProduct

Coupled variable selection for regression modeling of complex treatment patterns in a clinical cancer registry.

2013

For determining a manageable set of covariates potentially influential with respect to a time-to-event endpoint, Cox proportional hazards models can be combined with variable selection techniques, such as stepwise forward selection or backward elimination based on p-values, or regularized regression techniques such as component-wise boosting. Cox regression models have also been adapted for dealing with more complex event patterns, for example, for competing risks settings with separate, cause-specific hazard models for each event type, or for determining the prognostic effect pattern of a variable over different landmark times, with one conditional survival model for each landmark. Motivat…
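As a schematic illustration of the stepwise forward selection mentioned above, here is a generic greedy loop. The `score` callable is a placeholder (with Cox models it might be, say, a cross-validated partial log-likelihood), not the paper's actual criterion:

```python
def forward_select(candidates, score, max_vars=5):
    """Generic stepwise forward selection: repeatedly add the candidate
    variable that most improves `score(selected)` (higher is better),
    stopping when no candidate improves it or max_vars is reached."""
    selected, best = [], score([])
    while candidates and len(selected) < max_vars:
        gains = [(score(selected + [c]), c) for c in candidates]
        top_score, top = max(gains)
        if top_score <= best:            # no candidate helps any more
            break
        selected.append(top)
        candidates = [c for c in candidates if c != top]
        best = top_score
    return selected
```

Backward elimination and boosting differ in how the update step and stopping rule are defined, but the overall iterative structure is similar.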

Statistics and Probability; Male; Niacinamide; Boosting (machine learning); Carcinoma, Hepatocellular; Epidemiology; Computer science; Score; Feature selection; Antineoplastic Agents; Decision Support Techniques; Neoplasms; Covariate; Humans; Registries; Aged; Proportional Hazards Models; Phenylurea Compounds; Liver Neoplasms; Regression analysis; Confounding Factors, Epidemiologic; Middle Aged; Sorafenib; Prognosis; Regression; Cancer registry; Data Interpretation, Statistical; Data mining — Statistics in Medicine
researchProduct

Inferential tools in penalized logistic regression for small and sparse data: A comparative study.

2016

This paper focuses on inferential tools in the logistic regression model fitted by the Firth penalized likelihood. In this context, the Likelihood Ratio statistic is often reported to be the preferred choice over the ‘traditional’ Wald statistic. In this work, we consider and discuss a wider range of test statistics, including the robust Wald, the Score, and the recently proposed Gradient statistic. We compare all these asymptotically equivalent statistics in terms of interval estimation and hypothesis testing via simulation experiments and analyses of two real datasets. We find that the Likelihood Ratio statistic does not appear to be the best inferential device in the Firth penal…
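To see why asymptotically equivalent statistics can disagree, consider the simplest case: an intercept-only logistic model (unpenalized maximum likelihood for illustration; the paper's setting adds the Firth penalty), where both the Wald and likelihood ratio statistics have closed forms:

```python
import math

def wald_and_lr_intercept(k, n, p0=0.5):
    """Wald and likelihood-ratio statistics for H0: p = p0 in an
    intercept-only logistic (binomial) model; assumes 0 < k < n."""
    p_hat = k / n
    beta_hat = math.log(p_hat / (1 - p_hat))    # logit of the MLE
    beta_0 = math.log(p0 / (1 - p0))
    var_beta = 1.0 / (n * p_hat * (1 - p_hat))  # inverse Fisher information
    wald = (beta_hat - beta_0) ** 2 / var_beta
    def loglik(p):
        return k * math.log(p) + (n - k) * math.log(1 - p)
    lr = 2 * (loglik(p_hat) - loglik(p0))
    return wald, lr
```

With k = 3 successes in n = 20 trials and H0: p = 0.5, the Wald statistic is about 7.67 while the LR statistic is about 10.82, a noticeable small-sample discrepancy of the kind the paper studies.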

Statistics and Probability; Score test; PRESS statistic; Epidemiology; Statistics as Topic; Score; Wald test; Logistic regression; Health Information Management; Statistics; Econometrics; Humans; Mathematics; Likelihood Functions; Models, Statistical; Logistic regression, Firth penalized likelihood, sandwich formula, score statistic, gradient statistic; Logistic Models; Likelihood-ratio test; Data Interpretation, Statistical; Sample Size; Ancillary statistic; Settore SECS-S/01 - Statistica — Statistical Methods in Medical Research
researchProduct

Treating missing data in a clinical neuropsychological dataset--data imputation.

2001

Missing data frequently reduce the applicability of clinically collected data in research requiring multivariate statistics. In data imputation, missing values are replaced by predicted values obtained from models based on auxiliary information. Our aim was to complete a clinical child neuropsychological data set containing 5.2% of missing observations. This was to be used in research requiring multivariate statistics. We compared four data imputation methods by artificially deleting some data. A real-donor imputation method which preserved the parameter estimates and which predicted the observed values with acceptable accuracy was used to complete the data set. In addressing the lack of st…
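The "real-donor" imputation described above replaces a missing value with one actually observed in a similar complete case. A minimal sketch (the field names and the squared-distance similarity are illustrative choices, not the paper's):

```python
def hot_deck_impute(records, target):
    """Real-donor ('hot deck') imputation sketch: a missing value (None)
    in field `target` is replaced by the value from the complete record
    closest on the remaining numeric fields (squared distance)."""
    complete = [r for r in records if r[target] is not None]
    for r in records:
        if r[target] is None:
            aux = [k for k in r if k != target and r[k] is not None]
            donor = min(complete,
                        key=lambda d: sum((d[k] - r[k]) ** 2 for k in aux))
            r[target] = donor[target]    # copy an observed value
    return records
```

Because the imputed values are observed ones, marginal distributions are better preserved than under, say, mean imputation, which is one reason such a method can keep parameter estimates intact.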

Male; Multivariate statistics; Neuropsychology; Neuropsychological Tests; Missing data; Data set; Psychiatry and Mental health; Clinical Psychology; Neuropsychology and Physiological Psychology; Arts and Humanities (miscellaneous); Data Interpretation, Statistical; Statistics; Developmental and Educational Psychology; Humans; Female; Data mining; Imputation (statistics); Psychology; Child; Cognition Disorders — The Clinical Neuropsychologist
researchProduct

Weighted Least-Squares Likelihood Ratio Test for Branch Testing in Phylogenies Reconstructed from Distance Measures

2005

A variety of analytical methods is available for branch testing in distance-based phylogenies. However, these methods are rarely used, possibly because the estimation of some of their statistics, especially the covariances, is not always feasible. We show that these difficulties can be overcome if some simplifying assumptions are made, namely distance independence. The weighted least-squares likelihood ratio test (WLS-LRT) we propose is easy to perform, using only the distances and some of their associated variances. If no variances are known, the use of the Felsenstein F-test, also based on weighted least squares, is discussed. Using simulated data and a data set of 43 mammalian mitochondr…
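The WLS-LRT compares the weighted residual sum of squares of a constrained and an unconstrained tree fit. A schematic of the test statistic, assuming independent distances with known variances (the function and its inputs are illustrative, not the authors' code):

```python
def wls_lrt_stat(d, d0, d1, var):
    """Sketch of a weighted least-squares LRT-style statistic: observed
    distances d, fitted distances under the constrained tree (d0, e.g.
    branch length forced to zero) and the unconstrained tree (d1), and
    per-distance variances. Assuming independent distances, the drop in
    weighted RSS is referred to a chi-square distribution whose degrees
    of freedom equal the difference in free branch lengths."""
    def rss(fit):
        return sum((di - fi) ** 2 / vi for di, fi, vi in zip(d, fit, var))
    return rss(d0) - rss(d1)
```

The distance-independence simplification mentioned in the abstract is what makes the weighted RSS a sum of scaled squares, avoiding the covariance estimates that make the general test hard to apply.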

Mammals; Likelihood Functions; Models, Genetic; Reproducibility of Results; Generalized least squares; Classification; DNA, Mitochondrial; Distance measures; Evolution, Molecular; Data set; Data Interpretation, Statistical; Likelihood-ratio test; Statistics; HIV-1; Genetics; Animals; Cluster Analysis; Point (geometry); Phylogeny; Ecology, Evolution, Behavior and Systematics; Independence (probability theory); Reliability (statistics); Selection (genetic algorithm); Mathematics — Systematic Biology
researchProduct

Efficient estimation of generalized linear latent variable models.

2019

Generalized linear latent variable models (GLLVM) are popular tools for modeling multivariate, correlated responses. Such data are often encountered, for instance, in ecological studies, where presence-absences, counts, or biomass of interacting species are collected from a set of sites. Until very recently, the main challenge in fitting GLLVMs has been the lack of computationally efficient estimation methods. For likelihood based estimation, several closed form approximations for the marginal likelihood of GLLVMs have been proposed, but their efficient implementations have been lacking in the literature. To fill this gap, we show in this paper how to obtain computationally convenient estim…
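One of the closed-form approximations alluded to is the Laplace approximation, which replaces the intractable integral over a latent variable with a Gaussian integral around the mode of the integrand. A self-contained single-observation sketch for a Poisson response with a normal random effect (illustrative of the idea, not the paper's implementation), checked against brute-force quadrature:

```python
import math

def laplace_marginal(y, sigma):
    """Laplace approximation to the marginal likelihood of one Poisson
    observation y with a normal(0, sigma^2) random effect u on the log
    scale: the integrand is exp(h(u)) with
    h(u) = y*u - exp(u) - u^2/(2*sigma^2), times normalising constants."""
    s2 = sigma ** 2
    def h(u):
        return y * u - math.exp(u) - u * u / (2 * s2)
    u = 0.0
    for _ in range(50):                       # Newton iterations for the mode
        grad = y - math.exp(u) - u / s2
        hess = -math.exp(u) - 1 / s2          # always negative: concave h
        u -= grad / hess
    const = -math.lgamma(y + 1) - 0.5 * math.log(2 * math.pi * s2)
    # exp(h at mode) * sqrt(2*pi / |h''(mode)|)
    return math.exp(h(u) + const) * math.sqrt(2 * math.pi / (math.exp(u) + 1 / s2))

def quadrature_marginal(y, sigma, n=4000, lim=10.0):
    """Trapezoid integration of the same integrand, for checking."""
    s2 = sigma ** 2
    const = -math.lgamma(y + 1) - 0.5 * math.log(2 * math.pi * s2)
    step = 2 * lim / n
    total = 0.0
    for i in range(n + 1):
        u = -lim + i * step
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(y * u - math.exp(u) - u * u / (2 * s2) + const)
    return total * step
```

In a GLLVM the same approximation is applied per site to a multivariate latent vector; the computational appeal is that only a mode search and a Hessian determinant are needed instead of numerical integration.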

Multivariate statistics; Multivariate analysis; Computer science; Binomials; Polynomials; Amoebas; statistical models; estimation; Protozoans; Likelihood Functions; Multidisciplinary; Approximation Methods; Statistical Models; Simulation and Modeling; Applied Mathematics; Statistics; Linear model; Eukaryota; Laplace's method; Data Interpretation, Statistical; Physical Sciences; Vertebrates; Medicine; Algorithms; Research Article; Optimization; Science; Latent variable; Research and Analysis Methods; generalized linear latent variable models; Set (abstract data type); Birds; Animals; Computer Simulation; Organisms; Biology and Life Sciences; Statistical model; Marginal likelihood; Algebra; Amniotes; Multivariate Analysis; Linear Models; Mathematics; Software — PLoS ONE
researchProduct

Stochastic Nonlinear Time Series Forecasting Using Time-Delay Reservoir Computers: Performance and Universality

2014

Reservoir computing is a recently introduced machine learning paradigm that has already shown excellent performance in the processing of empirical data. We study a particular kind of reservoir computer called the time-delay reservoir, constructed by sampling the solution of a time-delay differential equation, and show its good performance in forecasting the conditional covariances associated with multivariate discrete-time nonlinear stochastic processes of VEC-GARCH type, as well as in predicting actual daily market realized volatilities computed with intraday quotes, using daily log-return series of moderate size as training input. We …
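A time-delay reservoir drives a single nonlinear node with a masked input plus delayed feedback, and the "virtual node" states sampled within each delay period serve as features for a trained linear readout. A heavily simplified sketch of the state update (parameter names and values, the tanh nonlinearity, and the decoupled-node approximation are assumptions for illustration; the readout training is omitted):

```python
import math
import random

def time_delay_reservoir(inputs, n_virtual=20, eta=0.5, gamma=0.05, seed=0):
    """Sketch of a time-delay reservoir: one nonlinear node whose delayed
    feedback is sampled at n_virtual points per delay period ('virtual
    nodes'), with a fixed random input mask. Returns one state vector per
    input sample; a linear readout trained on these states (e.g. by ridge
    regression) would produce the forecasts."""
    rng = random.Random(seed)
    mask = [rng.uniform(-1, 1) for _ in range(n_virtual)]  # input masking
    state = [0.0] * n_virtual          # virtual-node states over one delay
    collected = []
    for u in inputs:
        # each virtual node sees its masked input plus delayed feedback
        state = [math.tanh(eta * mask[j] * u + gamma * state[j])
                 for j in range(n_virtual)]
        collected.append(list(state))
    return collected
```

Because only the linear readout is trained, the reservoir itself stays fixed, which is what makes this paradigm cheap to fit compared with fully trained recurrent networks.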

Multivariate statistics; Mathematical optimization; Time Factors; Realized variance; Differential equation; Computer science; Cognitive Neuroscience; Computer Communication Networks; Artificial Intelligence; Humans; Time series; Simulation; Stochastic Processes; Series (mathematics); Artificial neural network; Computers; Stochastic process; Reservoir computing; Sampling (statistics); Universality (dynamical systems); Nonlinear system; Nonlinear Dynamics; Data Interpretation, Statistical; Neural Networks, Computer; Forecasting — SSRN Electronic Journal
researchProduct